Modeling of Feedforward Neural Network in PAHRA Architecture

Author

  • LIBERIOS VOKOROKOS
Abstract

Multilayered feedforward neural networks are among the most popular neural networks and represent the most standard configuration of biologically inspired mathematical models of a simplified neural system. These networks are massively parallel systems with a large number of simple processing elements, so it is natural to try to implement this kind of system on a parallel computer architecture. The parallel architecture described in this article provides a flexible platform for simulating multilayered feedforward neural networks trained with the back-propagation algorithm. The computational model of the given architecture makes it possible to formally describe the components of a parallel implementation of a neural network and provides a mathematical tool for verifying system performance.

Key-Words: multilayered feedforward neural network, parallel computer, PAHRA, processing element, computational model
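For orientation, the kind of network the abstract refers to can be sketched compactly: a small multilayered feedforward network trained with plain back-propagation on a toy task. The sketch below is illustrative NumPy code only, not the PAHRA implementation or its mapping onto processing elements; the layer sizes, learning rate and task are assumptions made for the example.

```python
import numpy as np

# Toy task: learn XOR with a 2-4-1 feedforward network (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the mean squared error w.r.t. each parameter.
    err = out - y
    d_out = err * out * (1 - out)          # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # delta back-propagated to the hidden layer

    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The dense matrix products and per-layer delta computations are the kind of work that a parallel architecture with many simple processing elements would distribute.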


Similar articles

Training Set Parallelism in Pahra Architecture

Multilayered feed-forward neural networks trained with the back-propagation algorithm are among the most popular "online" artificial neural networks. These networks exhibit strong inherent parallelism owing to the large number of simple computational elements they contain, so it is natural to try to implement this kind of parallelism on a parallel computer architecture. The Parallel Hybrid Ring ...
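The training-set parallelism this abstract mentions (replicas of the same model working on disjoint parts of the training set, with their updates combined) can be illustrated independently of the PAHRA ring. A minimal sketch, assuming a simple linear model, equal-sized shards and plain gradient averaging, none of which come from the paper:

```python
import numpy as np

def shard_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient for a linear model on one training-set shard."""
    err = X_shard @ w - y_shard
    return X_shard.T @ err / len(X_shard)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=1000)

n_workers = 4                      # stand-ins for parallel processing elements
w = np.zeros(3)
for step in range(200):
    # Each "worker" sees only its own slice of the training set.
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [shard_gradient(w, Xs, ys) for Xs, ys in shards]
    # Combine the partial gradients (here: simple averaging) and update once.
    w -= 0.1 * np.mean(grads, axis=0)

print(w)   # should approach [2, -1, 0.5]
```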

Double Robustness Analysis for Determining Optimal Feedforward Neural Network Architecture

This paper incorporates robustness into neural network modeling and proposes a novel two-phase robustness analysis approach for determining the optimal feedforward neural network (FNN) architecture in terms of the Hellinger distance of the probability density function (PDF) of the error distribution. The proposed approach is illustrated with an example in this paper.
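The selection criterion named here is the Hellinger distance between error distributions. The two-phase procedure itself is not given in this abstract, but the distance can be computed from histogram estimates of the error PDFs. A small sketch under that assumption, with the reference distribution and candidate errors invented for illustration:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete PDFs given as (unnormalized) histograms."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0)

# Compare the error distributions of two hypothetical candidate networks
# against an assumed reference (e.g. a narrow zero-centred error distribution).
rng = np.random.default_rng(2)
bins = np.linspace(-3, 3, 41)
ref_errors   = rng.normal(0.0, 0.5, size=5000)   # assumed reference errors
net_a_errors = rng.normal(0.0, 0.6, size=5000)   # candidate A: similar spread
net_b_errors = rng.normal(0.5, 1.0, size=5000)   # candidate B: biased, wider

ref_hist, _ = np.histogram(ref_errors, bins=bins)
for name, errs in [("A", net_a_errors), ("B", net_b_errors)]:
    hist, _ = np.histogram(errs, bins=bins)
    print(name, hellinger(ref_hist, hist))        # smaller = closer to the reference
```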

Neural Network Architecture for 3D Object Representation

The paper discusses a neural network architecture for 3D object modeling. A multi-layered feedforward structure having as inputs the 3D-coordinates of the object points is employed to model the object space. Cascaded with a transformation neural network module, the proposed architecture can be used to generate and train 3D objects, perform transformations, set operations and object morphing. A ...

Predicting the coefficients of the Daubert and Danner correlation using a neural network model

In the present research, three different architectures were investigated to predict the coefficients of the Daubert and Danner equation for calculation of saturated liquid density. The first architecture with 4 network input parameters including critical temperature, critical pressure, critical volume and molecular weight, the second architecture with 6 network input parameters including the on...
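For context, the Daubert and Danner correlation referred to here is usually quoted in the DIPPR form rho = A / B^(1 + (1 - T/C)^D) for saturated liquid density; treating that form as an assumption, the sketch below only evaluates the correlation for given coefficients and is not the paper's neural network model. The coefficient values are invented for illustration.

```python
def daubert_danner_density(T, A, B, C, D):
    """Saturated liquid density, assuming the commonly cited DIPPR form
    rho = A / B**(1 + (1 - T/C)**D). Units follow whatever the coefficients use."""
    return A / B ** (1.0 + (1.0 - T / C) ** D)

# Purely illustrative coefficients (not taken from the paper or from data tables).
A, B, C, D = 4.6, 0.27, 647.1, 0.29
for T in (300.0, 350.0, 400.0):
    print(T, daubert_danner_density(T, A, B, C, D))
```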

Compact Feedforward Sequential Memory Networks for Large Vocabulary Continuous Speech Recognition

In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent archite...
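The core idea of an FSMN is to replace recurrence with a learnable memory block: a weighted, FIR-filter-like sum over the current and past hidden activations of a layer. A minimal sketch of such a scalar-weighted memory block, with the number of taps and the tap weights assumed for illustration and the wiring into the next layer omitted:

```python
import numpy as np

def fsmn_memory_block(H, a):
    """Scalar-FSMN-style memory block: for each time step t, a weighted sum of the
    current and the previous len(a)-1 hidden activation vectors.
    H: (T, d) hidden activations; a: (N,) tap weights (learnable in a real FSMN)."""
    T, d = H.shape
    M = np.zeros_like(H)
    for t in range(T):
        for i in range(len(a)):
            if t - i >= 0:
                M[t] += a[i] * H[t - i]   # tap i looks i steps into the past
    return M

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 4))        # toy hidden-layer outputs over 6 time steps
a = np.array([0.5, 0.3, 0.2])      # 3-tap memory over the current and 2 past frames
print(fsmn_memory_block(H, a).shape)
```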


Journal title:

Volume   Issue

Pages  -

Publication date: 2009